- 
            Reducing buildings’ carbon emissions is an important sustainability challenge. While scheduling flexible building loads has previously been used for a variety of grid and energy optimizations, reducing carbon footprint with such flexible loads poses new challenges: methods must balance both energy and carbon costs while also limiting the user inconvenience of delaying those loads. This article highlights the potential conflict between electricity prices and carbon emissions and the resulting tradeoffs in carbon-aware and cost-aware load scheduling. To address this tradeoff, we propose GreenThrift, a home automation system that leverages the scheduling capabilities of smart appliances and knowledge of future carbon intensity and cost to reduce both the carbon emissions and costs of flexible energy loads. At the heart of GreenThrift is an optimization technique that automatically computes schedules based on user configurations and preferences. We evaluate the effectiveness of GreenThrift using real-world carbon intensity data, electricity prices, and load traces from multiple locations and across different scenarios and objectives. Our results show that GreenThrift can replicate the offline optimal schedule, retaining 97% of the savings when optimizing for carbon emissions. Moreover, we show how GreenThrift can balance the conflict between carbon and cost, retaining 95.3% and 85.5% of the potential carbon and cost savings, respectively.
            Free, publicly-accessible full text available June 30, 2026
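The core idea in the abstract above — choosing when to run a flexible load so that a weighted combination of carbon and cost is minimized before a deadline — can be illustrated with a simple exhaustive search over start slots. This is a minimal sketch, not GreenThrift's actual optimizer; the function name, the slot-based time model, and the `alpha` weighting parameter are all assumptions.

```python
def best_start_slot(duration, deadline, carbon, price, alpha=0.5):
    """Pick the start slot minimizing a weighted carbon/cost objective.

    carbon, price: per-slot series (e.g., hourly carbon intensity and
    electricity price); the load runs `duration` consecutive slots and
    must finish by slot `deadline`. alpha=1 optimizes carbon only,
    alpha=0 optimizes cost only.
    """
    best, best_val = 0, float("inf")
    for s in range(deadline - duration + 1):
        val = sum(alpha * carbon[t] + (1 - alpha) * price[t]
                  for t in range(s, s + duration))
        if val < best_val:
            best, best_val = s, val
    return best
```

Sweeping `alpha` between 0 and 1 is one plausible way to explore the carbon/cost tradeoff the abstract describes.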
- 
            Free, publicly-accessible full text available June 16, 2026
- 
            Free, publicly-accessible full text available May 6, 2026
- 
            We introduce and study spatiotemporal online allocation with deadline constraints (SOAD), a new online problem motivated by emerging challenges in sustainability and energy. In SOAD, an online player completes a workload by allocating and scheduling it on the points of a metric space (X, d) while subject to a deadline T. At each time step, a service cost function is revealed that represents the cost of servicing the workload at each point, and the player must irrevocably decide the current allocation of work to points. Whenever the player moves this allocation, they incur a movement cost defined by the distance metric d(⋅, ⋅) that captures, e.g., an overhead cost. SOAD formalizes the open problem of combining general metrics and deadline constraints in the online algorithms literature, unifying problems such as metrical task systems and online search. We propose a competitive algorithm for SOAD along with a matching lower bound establishing its optimality. Our main algorithm, ST-CLIP, is a learning-augmented algorithm that takes advantage of predictions (e.g., forecasts of relevant costs) and achieves an optimal consistency-robustness trade-off. We evaluate our proposed algorithms in a simulated case study of carbon-aware spatiotemporal workload management, an application in sustainable computing that schedules a delay-tolerant batch compute job on a distributed network of data centers. In these experiments, we show that ST-CLIP substantially improves on heuristic baseline methods.
            Free, publicly-accessible full text available March 6, 2026
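A minimal way to see the service-cost/movement-cost interplay in the SOAD formulation is a greedy heuristic that, at each revealed cost function, moves to the point minimizing service cost plus movement distance. This is not the paper's competitive ST-CLIP algorithm (which is deadline-aware and learning-augmented); it only illustrates the cost structure, and all names here are hypothetical.

```python
def greedy_soad(dist, service_costs, start):
    """Greedy allocation over a finite metric space.

    dist: pairwise distance matrix d(i, j) between points.
    service_costs: one list of per-point service costs per time step.
    At each step, pay the chosen point's service cost plus the movement
    cost from the current point. Returns (final point, total cost).
    """
    cur, total = start, 0.0
    for costs in service_costs:
        nxt = min(range(len(costs)), key=lambda p: costs[p] + dist[cur][p])
        total += costs[nxt] + dist[cur][nxt]
        cur = nxt
    return cur, total
```

Greedy choices like this can be arbitrarily bad in the worst case, which is exactly why SOAD calls for a competitive algorithm with a matching lower bound.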
- 
            Free, publicly-accessible full text available November 20, 2025
- 
            Content Delivery Networks (CDNs) are Internet-scale systems that deliver streaming and web content to users from many geographically distributed edge data centers. Since large CDNs can comprise hundreds of thousands of servers deployed in thousands of global data centers, they can consume a large amount of energy for their operations and thus are responsible for large amounts of greenhouse gas (GHG) emissions. As these networks scale to cope with increased demand for bandwidth-intensive content, their emissions are expected to rise further, making sustainable design and operation an important goal for the future. Since different geographic regions vary in the carbon intensity and cost of their electricity supply, in this paper, we consider spatial shifting as a key technique to jointly optimize the carbon emissions and energy costs of a CDN. We present two forms of shifting: spatial load shifting, which operates within the time scale of minutes, and VM capacity shifting, which operates at a coarse time scale of days or weeks. The proposed techniques jointly reduce carbon and electricity costs while considering the performance impact of increased request latency from such optimizations. Using real-world traces from a large CDN and carbon intensity and energy price data from electric grids in different regions, we show that increasing the latency by 60 ms can reduce carbon emissions by up to 35.5%, 78.6%, and 61.7% across the US, Europe, and worldwide, respectively. In addition, we show that capacity shifting can increase carbon savings by up to 61.2%. Finally, we analyze the benefits of spatial shifting and show that it increases carbon savings from added solar energy by 68% and 130% in the US and Europe, respectively.
            Free, publicly-accessible full text available November 20, 2025
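The latency-bounded spatial load shifting described above can be sketched as: among sites whose extra latency over the user's home site stays within a budget, route to the lowest-carbon one. This is an illustrative sketch under assumed inputs (per-site carbon intensity and latency arrays), not the paper's actual optimization.

```python
def shift_load(carbon, latency, home, max_extra_ms):
    """Pick a serving site for traffic homed at `home`.

    carbon: per-site carbon intensity (e.g., gCO2/kWh).
    latency: per-site latency to the user (ms).
    Only sites within `max_extra_ms` of the home site's latency are
    eligible; among those, choose the lowest-carbon site.
    """
    feasible = [i for i in range(len(carbon))
                if latency[i] - latency[home] <= max_extra_ms]
    return min(feasible, key=lambda i: carbon[i])
```

The `max_extra_ms` knob plays the role of the paper's latency-increase budget (e.g., the 60 ms figure in the results).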
- 
            As solar electricity has become cheaper than the retail electricity price, residential consumers are trying to reduce costs by meeting more demand using solar energy. One way to achieve this is to invest in the solar infrastructure collaboratively. When houses form a coalition, houses with high solar potential or surplus roof capacity can install more panels and share the generated solar energy with others, lowering the total cost. Fair sharing of the resulting cost savings across the houses is crucial to prevent the coalition from breaking. However, estimating the fair share of each house is complex, as houses contribute different amounts of generation and demand in the coalition, and rooftop solar generation across houses with similar roof capacities can vary widely. In this paper, we present HeliosFair, a system that minimizes the total electricity costs of a community that shares solar energy and then uses Shapley values to fairly distribute the cost savings thus obtained. Using real-world data, we show that the joint CapEx and OpEx electricity costs of a community sharing solar can be reduced by 12.7% on average (11.3% on average with roof capacity constraints) over houses installing solar energy individually. Our Shapley-value-based approach can fairly distribute these savings across houses based on their contributions towards cost reduction, while commonly used ad hoc approaches are unfair under many scenarios. HeliosFair is also the first work to consider practical constraints such as differences in solar potential across houses, rooftop capacity, and the weight of solar panels, making it deployable in practice.
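The Shapley-value idea behind HeliosFair's fair division — each house's share is its average marginal contribution over all orders in which the coalition could form — can be computed exactly for small coalitions. A minimal sketch, assuming a `value` function given as a table from coalitions to total savings (the function name and input format are not from the paper):

```python
from itertools import permutations
from math import factorial

def shapley(players, value):
    """Exact Shapley values by enumerating all join orders.

    value: dict mapping frozenset coalitions to their total savings,
    with value[frozenset()] == 0. Each player's Shapley value is its
    marginal contribution averaged over all n! orderings.
    """
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += value[coalition | {p}] - value[coalition]
            coalition |= {p}
    n_fact = factorial(len(players))
    return {p: phi[p] / n_fact for p in players}
```

Exact enumeration is factorial in the number of houses; for larger communities, sampling-based Shapley approximations are the usual workaround.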
- 
            As computing demand continues to grow, minimizing its environmental impact has become crucial. This paper presents a study on carbon-aware scheduling algorithms, focusing on reducing carbon emissions of delay-tolerant batch workloads. Inspired by the Follow the Leader strategy, we introduce a simple yet efficient meta-algorithm, called FTL, that dynamically selects the most efficient scheduling algorithm based on real-time data and historical performance. Without fine-tuning and parameter optimization, FTL adapts to variability in job lengths, carbon intensity forecasts, and regional energy characteristics, consistently outperforming traditional carbon-aware scheduling algorithms. Through extensive experiments using real-world data traces, FTL achieves 8.2% and 14% improvement in average carbon footprint reduction over the closest runner-up algorithm and the carbon-agnostic algorithm, respectively, demonstrating its efficacy in minimizing carbon emissions across multiple geographical regions.
            Free, publicly-accessible full text available December 1, 2025
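The Follow-the-Leader selection rule the abstract builds on is simple to state: at each round, run the candidate algorithm with the lowest cumulative cost so far. A minimal sketch of that rule (the data layout is an assumption, and this omits FTL's scheduling-specific details):

```python
def follow_the_leader(history):
    """Return the index of the 'leader' — the candidate algorithm with
    the lowest cumulative cost over past rounds.

    history: list of rounds, each a list with one observed cost
    (e.g., carbon footprint) per candidate algorithm.
    Ties break toward the lowest index.
    """
    totals = [sum(costs) for costs in zip(*history)]
    return min(range(len(totals)), key=lambda i: totals[i])
```

The appeal of this rule, as the abstract notes, is that it needs no fine-tuning: the leader changes automatically as regional or workload conditions shift which candidate performs best.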
- 
            Cloud platforms’ rapid growth raises significant concerns about their electricity consumption and resulting carbon emissions. Power capping is a known technique for limiting the power consumption of data centers where workloads are hosted. Today’s data center computer clusters co-locate latency-sensitive web and throughput-oriented batch workloads. When power capping is necessary, throttling only the batch tasks without restricting latency-sensitive web workloads is ideal, because guaranteeing low response time for latency-sensitive workloads is a must due to Service-Level Objective (SLO) requirements. This paper proposes PADS, a hardware-agnostic, workload-aware power capping system. Because it does not rely on hardware mechanisms such as RAPL or DVFS, it can keep the power consumption of clusters equipped with heterogeneous architectures such as x86 and ARM below the enforced power limit while minimizing the impact on latency-sensitive tasks. It uses an application-performance model of both latency-sensitive and batch workloads to ensure power safety with controllable performance. Our power capping technique uses diagonal scaling and relies on the control group (cgroup) feature of the Linux kernel. Our results indicate that PADS is highly effective in reducing power while respecting the tail latency requirement of the latency-sensitive workload. Furthermore, compared to state-of-the-art solutions, PADS demonstrates lower P95 latency and 90% higher effectiveness in respecting power limits.
            Free, publicly-accessible full text available November 2, 2025
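The batch-only throttling idea above can be sketched as a quota computation: when measured power exceeds the cap, shrink only the batch tasks' CPU allocation proportionally and leave the web tasks untouched. This is an illustrative sketch, not PADS's actual controller or its diagonal-scaling model; the function and its units are assumptions. In a real cgroup v2 deployment, the resulting batch quota would typically be applied by writing to the batch group's `cpu.max` file.

```python
def compute_quotas(power_w, cap_w, batch_quota, web_quota):
    """Workload-aware capping sketch: scale down only the batch CPU
    quota when measured power exceeds the cap.

    power_w, cap_w: measured power and enforced limit (watts).
    batch_quota, web_quota: current CPU quotas (arbitrary units).
    The web quota is never reduced, protecting latency-sensitive SLOs.
    """
    if power_w <= cap_w:
        return batch_quota, web_quota
    scale = cap_w / power_w
    return max(1, int(batch_quota * scale)), web_quota
```

Because the lever is a software quota rather than RAPL or DVFS, the same logic works unchanged across x86 and ARM nodes, which is the hardware-agnostic property the abstract emphasizes.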
- 
            Reducing tail latency has become a crucial issue for optimizing the performance of online cloud services and distributed applications. In distributed applications, there are many causes of high end-to-end tail latency, including operating system delays, request re-ordering due to fan-out/fan-in, and network congestion. Although recent research has focused on reducing tail latency for individual application components, such as through request replication and scheduling, in this paper we argue for a holistic approach that reduces end-to-end tail latency across application components. We propose TailClipper, a distributed scheduler that tags each arriving request with an arrival timestamp and propagates it across the microservices' call chain. TailClipper then uses arrival timestamps to implement an oldest-request-first scheduler that combines global first-come, first-served scheduling with a limited form of processor sharing to reduce end-to-end tail latency. In doing so, TailClipper can counter the performance degradation caused by request reordering in multi-tiered and microservices-based applications. We implement TailClipper as a userspace Linux scheduler and evaluate it using cloud workload traces and a real-world microservices application. Compared to state-of-the-art schedulers, our experiments reveal that TailClipper improves the 99th percentile response time by up to 81%, while also improving the mean response time and the system throughput by up to 54% and 29%, respectively, under high loads.
            Free, publicly-accessible full text available November 20, 2025
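The oldest-request-first queueing discipline described above — order requests by their original arrival timestamp, propagated across tiers, rather than by local arrival order — can be sketched with a priority queue. A minimal sketch only; TailClipper itself is a userspace Linux scheduler with a processor-sharing component this omits, and the class and method names are hypothetical.

```python
import heapq

class OldestRequestFirst:
    """Dequeue the request with the earliest original arrival timestamp.

    Each request carries the timestamp assigned at its first arrival and
    keeps it across the call chain, so a downstream tier can undo the
    reordering introduced by fan-out/fan-in.
    """
    def __init__(self):
        self._heap = []

    def enqueue(self, arrival_ts, request):
        heapq.heappush(self._heap, (arrival_ts, request))

    def dequeue(self):
        return heapq.heappop(self._heap)[1]
```

The key design point is that the timestamp is global: sorting by local arrival time at each tier would re-introduce exactly the cross-tier reordering that inflates the tail.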